Sam Altman acknowledges that the "dead internet" theory is becoming a reality.

A recent comment by Sam Altman, CEO of OpenAI, has revived one of the most disturbing ideas about the digital future: that the internet as we knew it may no longer exist. Not in technical terms, but as a human space.
“I never took the dead internet theory very seriously, but it seems there are now a lot of accounts run by language models,” Altman wrote on X, acknowledging that the hypothesis that the web would be completely taken over by artificial intelligence bots may not have been so far-fetched after all.
The Dead Internet Theory emerged in 2021 as a conspiracy theory: the claim that much of online activity is no longer genuine, but automatically generated.
At the time, the idea seemed far-fetched. But in recent years, with the advancement of language models like ChatGPT and the rampant growth of AI-generated content, that hypothesis has begun to seem increasingly realistic.
The paradox is that the person warning about the possible "death" of the internet is Sam Altman, the most visible figure in the artificial intelligence ecosystem. As CEO of OpenAI, his name is behind the launch of ChatGPT, the tool that accelerated the proliferation of machine-generated content and the multiplication of automated accounts.
The numbers speak for themselves. In 2024, for the first time, online traffic from bots surpassed that from humans. This was confirmed by a report from the cybersecurity firm Imperva, which revealed that 51% of website visits are carried out by automated programs.
These bots don't just replicate content: they interact, rank posts, and in many cases mimic human behavior with a precision that makes it difficult to distinguish what is real from what is not.
This isn't just happening on social media or forums. The phenomenon is now reaching media outlets, news platforms, e-commerce sites, and search engines.
Since text generators became popular, the number of AI-created articles that manage to climb Google's search results has multiplied. According to data from Originality AI, the presence of artificial texts in the search engine's top results has grown by 400% since ChatGPT's launch.
This type of content, often superficial or misleading, responds to a clear economic logic: the less a business relies on humans to generate clicks, the more profitable it becomes. And as long as users can't tell the difference, everything keeps working.
The death of the internet, interpreted by an AI image generator.
The problem is that the entire ecosystem is beginning to deteriorate. Online advertising, which underpins much of digital content, is also affected. An Adalytics report revealed that millions of ads were shown to bots instead of real people. There were even cases where Google's advertising systems ended up serving ads to its own bots.
At the same time, the quality of human content is beginning to erode. Platforms that once promoted creativity and sharing now prioritize quantity and algorithmic optimization. The result is an overproduction of recycled, repetitive information designed more to please the algorithm than to add value.
But the risk isn't just that content becomes irrelevant; it's that the machines' own learning begins to deteriorate. Language models like ChatGPT are trained on material circulating on the web.
If too much of that input is artificial, the system risks feeding on its own output, creating a feedback loop that can degrade the quality of future generations of AI.
Added to this is a more subtle but equally worrying consequence: the cultural impact. If the texts, tones, and words most circulated online come from artificial intelligence, the way people write, speak, and even think begins to be shaped by that pattern.
The web ceases to be a reflection of society and becomes a filtered, impoverished, and automated version of itself.
Today, much of the content seen on social media, blogs, and even the news is driven by automated systems that maximize clicks and virality, regardless of the truth or value of what they communicate. Added to this are deepfakes, synthetic influencers, AI-generated videos, and fake news sites created to manipulate public opinion.
What seemed like a dystopia a few years ago is now becoming a concrete possibility: an internet where humans are no longer the protagonists, but rather mere passive consumers of content produced by and for machines. And while there are still spaces for authenticity, it's becoming increasingly difficult to find them amid the artificial noise.
Clarin